11 research outputs found

    High Performance Modelling and Computing in Complex Medical Conditions: Realistic Cerebellum Simulation and Real-Time Brain Cancer Detection

    Get PDF
    Personalized medicine is the medicine of the future, and this innovation is supported by ongoing technological development, which will be crucial in this field. Several areas of healthcare research require high-performance technological systems that can process huge amounts of data in real time. By exploiting High Performance Computing (HPC) technologies, scientists aim to develop accurate diagnoses and personalized therapies. Reaching these goals requires investigating three main activities: managing large-scale data acquisition and analysis, designing computational models to simulate the patient's clinical status, and developing medical support systems that provide fast decisions during diagnosis or therapy. These three aspects rely on technological systems that could appear disconnected; in this new medicine, however, they will be connected in some way. As far as data are concerned, people today are immersed in technology and produce a huge amount of heterogeneous data. Part of these data has great medical potential, since it could help delineate the patient's health condition and could be integrated into the medical record to support clinical decisions. This process requires technological systems able to organize, analyse and share this information while guaranteeing fast data usability. In this context, HPC and, in particular, multicore and manycore processors will surely be important, since they can spread the computational workload across different cores to reduce processing times. These solutions are also crucial in computational modelling, a field in which several research groups aim to implement models that realistically reproduce the behaviour of human organs in order to build simulators of them. Such simulators, called digital twins, reproduce the organ activity of a specific patient to study disease progression or a new therapy. Patient data are the inputs of these models, which predict the patient's condition while avoiding invasive and expensive exams. The technological support that a realistic organ simulator requires is significant from the computational point of view; for this reason, devices such as GPUs, FPGAs, multicore processors or even supercomputers are needed. As an example in this field, the second chapter of this work describes the development of a cerebellar simulator exploiting HPC, where the complexity of the realistic mathematical models justifies this technological choice to achieve reduced processing times. This work falls within the Human Brain Project, which aims to run a complete, realistic simulation of the human brain. Finally, these technologies play a crucial role in the development of medical support systems. During surgery, it is very important that a support system provides a real-time answer; since this answer results from solving a complex mathematical problem, HPC systems are essential in this field as well. In environments such as operating rooms, it is more plausible that the computation is performed by local desktop systems able to process the data acquired directly during surgery. The third chapter of this thesis describes the development of a brain cancer detection system exploiting GPUs.
This support system, developed as part of the HELICoiD project, performs real-time processing of hyperspectral images of the brain, acquired during surgery, to provide a classification map that highlights the tumor, facilitating tissue resection for the neurosurgeon. In this field, the GPU has been crucial for achieving real-time processing. Finally, it can be asserted that HPC will play a crucial role in most fields of personalized medicine, since they all involve processing great amounts of data in reduced times, with the aim of providing patient-specific diagnoses and therapies.

    High performant simulations of cerebellar Golgi cells activity

    No full text
    The use of High Performance Computing (HPC) technologies is gaining interest in the field of neuronal activity simulation. Scientists' main goal is to understand and reproduce the behavior of cells in a realistic way. This will make it possible to undertake in silico experiments, instead of in vivo ones, to test new medicines, study cerebral pathologies and discover innovative therapies. To this aim, two main requirements must be met: neurons have to be described by realistic models, and their simulation should ideally satisfy the real-time constraint. This last property is very hard to achieve because the models used in these works are computationally very heavy. For this reason, the authors decided to exploit Graphics Processing Unit (GPU) technology to simulate the cellular activity of Golgi cells, which are found in the cerebellar cortex. This paper describes an efficient simulation of Golgi cell activity performed using NVIDIA GPUs. Results show that simulation times are reduced from 41 hours to about 2 hours when simulating 400,000 different cells.
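
    To illustrate the one-thread-per-cell mapping that makes this kind of workload suit GPUs, the sketch below integrates a deliberately simplified membrane equation for a population of cells in CUDA. The single leak current, the parameter values and the kernel name updateCells are illustrative assumptions, not the realistic Hodgkin-Huxley-style Golgi model the paper actually uses.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread integrates one cell's membrane potential for one time step.
// The single leak current is a placeholder: the realistic Golgi model adds
// several Hodgkin-Huxley ionic currents, each with its own gating state.
__global__ void updateCells(float *v, const float *iSyn, int nCells,
                            float dt, float gLeak, float eLeak, float cm)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nCells) return;
    float dv = (-gLeak * (v[i] - eLeak) + iSyn[i]) / cm; // dV/dt
    v[i] += dt * dv;                                     // forward Euler step
}

int main()
{
    const int nCells = 400000;   // order of the paper's largest run
    const float dt = 0.025f;     // ms, assumed integration step
    float *v, *iSyn;
    cudaMallocManaged(&v, nCells * sizeof(float));
    cudaMallocManaged(&iSyn, nCells * sizeof(float));
    for (int i = 0; i < nCells; ++i) { v[i] = -65.0f; iSyn[i] = 0.0f; }

    int threads = 256, blocks = (nCells + threads - 1) / threads;
    for (int step = 0; step < 1000; ++step)  // 25 ms of simulated activity
        updateCells<<<blocks, threads>>>(v, iSyn, nCells,
                                         dt, 0.3f, -65.0f, 1.0f);
    cudaDeviceSynchronize();
    printf("v[0] = %f mV\n", v[0]);
    cudaFree(v); cudaFree(iSyn);
    return 0;
}
```

    Because each cell's state update is independent within a time step, 400,000 cells map directly onto hundreds of thousands of concurrent threads, which is exactly the regime where GPUs pay off.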

    Towards the Simulation of a Realistic Large-Scale Spiking Network on a Desktop Multi-GPU System

    No full text
    The reproduction of the brain's activity and its functionality is the main goal of modern neuroscience. To this aim, several models have been proposed to describe the activity of single neurons at different levels of detail. Single neurons are then linked together to build a network, in order to reproduce complex behaviors. In the literature, different network-building rules and models have been described, targeting realistic distributions and connections of the neurons. In particular, the Granular layEr Simulator (GES) reconstructs the granular layer network considering biologically realistic connection rules, and simulates the network using the Hodgkin–Huxley model. The work proposed in this paper adopts the network reconstruction model of GES and proposes a simulation module based on the Leaky Integrate and Fire (LIF) model. This simulator targets the reproduction of the activity of large-scale networks, exploiting GPU technology to reduce processing times. Experimental results show that a multi-GPU system reduces the simulation of a network with more than 1.8 million neurons from approximately 54 hours to 13 hours.
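
    As a rough sketch of the LIF update such a simulation module parallelizes, the CUDA kernel below advances each neuron by one step of V' = (-(V - V_rest) + I)/tau, with a threshold test and reset. Parameter values, names and the single-GPU driver loop are assumptions; the paper's multi-GPU simulator would additionally partition the neuron array across devices and exchange spikes between steps.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread advances one leaky integrate-and-fire neuron by one step:
// leaky integration towards vRest, threshold test, reset on spike.
__global__ void lifStep(float *v, const float *iIn, unsigned char *spiked,
                        int n, float dt, float tau, float vRest,
                        float vThresh, float vReset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float vi = v[i] + dt * (-(v[i] - vRest) + iIn[i]) / tau;
    spiked[i] = (vi >= vThresh);
    v[i] = spiked[i] ? vReset : vi;
}

int main()
{
    const int n = 1800000;  // order of the 1.8-million-neuron network
    float *v, *iIn; unsigned char *spiked;
    cudaMallocManaged(&v, n * sizeof(float));
    cudaMallocManaged(&iIn, n * sizeof(float));
    cudaMallocManaged(&spiked, n);
    for (int i = 0; i < n; ++i) { v[i] = -70.0f; iIn[i] = 18.0f; }

    int threads = 256, blocks = (n + threads - 1) / threads;
    // A multi-GPU version would split [0, n) into per-device slices
    // (cudaSetDevice) and exchange spike flags between steps.
    for (int step = 0; step < 100; ++step)
        lifStep<<<blocks, threads>>>(v, iIn, spiked, n,
                                     0.1f, 20.0f, -70.0f, -54.0f, -70.0f);
    cudaDeviceSynchronize();
    printf("spiked[0] = %d\n", (int)spiked[0]);
    cudaFree(v); cudaFree(iIn); cudaFree(spiked);
    return 0;
}
```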

    The Human Brain Project: Parallel technologies for biologically accurate simulation of Granule cells

    No full text
    Studying and understanding the human brain is one of the main challenges for 21st-century scientists. The Human Brain Project was conceived to address this challenge in an innovative way, enabling collaboration between 112 partners spread across 24 European countries. The project is funded by the European Commission and will last until 2023. This paper describes the ongoing activity at one of the Italian units, which focuses on innovative brain simulation through high performance computing technologies. The simulations concern realistic models of neurons belonging to the cerebellar cortex. Due to the level of biological realism, the computational complexity of these models is high, requiring suitable technologies. In this work, simulations have been conducted on high-end Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The former technology is used during the model tuning and validation phases, while the latter achieves real-time processing, aiming at the possible development of embedded implantable systems. Simulation performance evaluations are discussed in the results section.

    Exploiting multi-core and many-core architectures for efficient simulation of biologically realistic models of Golgi cells

    No full text
    Realistic simulation of neuronal activity is of central importance for neuroscientists. These simulations make it possible to test new drugs, study cerebral pathologies and discover innovative therapies by undertaking in silico experiments instead of in vivo ones. However, the processing times needed to simulate these models are very long, so high performance computing technologies should be explored in order to provide faster simulations. In this work, the authors describe high-performance, realistic simulations of Golgi cell activity based on multi-core and many-core approaches: simulations are performed on multi-core Intel processors and on NVIDIA Graphics Processing Units. Moreover, the authors address the issue of portability among heterogeneous devices by proposing a solution based on the OpenCL paradigm. The results show that the considered parallel technologies, in particular the GPUs, are well suited to this kind of simulation and significantly reduce processing times.

    Parallel K-Means Clustering for Brain Cancer Detection Using Hyperspectral Images

    Get PDF
    The precise delineation of brain cancer is a crucial task during surgery. Several techniques are employed during surgical procedures to guide neurosurgeons in tumor resection; hyperspectral imaging (HSI) is a promising non-invasive and non-ionizing imaging technique that could improve and complement the currently used methods. The HypErspectraL Imaging Cancer Detection (HELICoiD) European project has addressed the development of a methodology for tumor tissue detection and delineation exploiting HSI techniques. In this approach, the K-means algorithm emerged as effective for delimiting tumor borders, which is of crucial importance; its main drawback is its computational complexity. This paper describes the development of the K-means clustering algorithm on different parallel architectures, in order to provide real-time processing during surgical procedures. The algorithm generates an unsupervised segmentation map that, combined with a supervised classification map, offers guidance to the neurosurgeon during the tumor resection task. We present parallel K-means clustering based on the OpenMP, CUDA and OpenCL paradigms. These implementations have been validated on an in-vivo hyperspectral human brain image database. Experimental results show that the CUDA version can achieve a speedup of ~150× with respect to sequential processing. This remarkable result makes the development of a real-time classification system possible.
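
    A minimal CUDA sketch of the assignment step of K-means over hyperspectral pixels follows, assuming a pixels-by-bands layout; the centroid-update step, the data sizes and all names are illustrative assumptions, not the HELICoiD implementation.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Assignment step of K-means: one thread per pixel computes the squared
// Euclidean distance to every centroid over all spectral bands and
// records the nearest cluster.
__global__ void assignClusters(const float *pixels,    // nPixels x nBands
                               const float *centroids, // k x nBands
                               int *labels, int nPixels, int nBands, int k)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPixels) return;
    int best = 0; float bestDist = 1e30f;
    for (int c = 0; c < k; ++c) {
        float d = 0.0f;
        for (int b = 0; b < nBands; ++b) {
            float diff = pixels[p * nBands + b] - centroids[c * nBands + b];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = c; }
    }
    labels[p] = best;
}

int main()
{
    const int nPixels = 250000, nBands = 128, k = 8;  // illustrative sizes
    float *pixels, *centroids; int *labels;
    cudaMallocManaged(&pixels, (size_t)nPixels * nBands * sizeof(float));
    cudaMallocManaged(&centroids, (size_t)k * nBands * sizeof(float));
    cudaMallocManaged(&labels, nPixels * sizeof(int));
    for (size_t i = 0; i < (size_t)nPixels * nBands; ++i) pixels[i] = (i % 97) * 0.01f;
    for (int i = 0; i < k * nBands; ++i) centroids[i] = (i % 89) * 0.011f;

    int threads = 256, blocks = (nPixels + threads - 1) / threads;
    // In a full K-means, this launch alternates with a centroid-update
    // step until the assignments stop changing.
    assignClusters<<<blocks, threads>>>(pixels, centroids, labels,
                                        nPixels, nBands, k);
    cudaDeviceSynchronize();
    printf("labels[0] = %d\n", labels[0]);
    return 0;
}
```

    The assignment step dominates the cost (every pixel against every centroid over all bands) and is embarrassingly parallel, which is why the GPU versions reach such large speedups over sequential processing.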

    Deep learning and lung ultrasound for Covid-19 pneumonia detection and severity classification

    No full text
    The European Covid-19 outbreak in February 2020 challenged the world's health systems, eliciting an urgent need for effective and highly reliable diagnostic instruments to help medical personnel. Deep learning (DL) has been demonstrated to be useful for diagnosis using both computed tomography (CT) scans and chest X-rays (CXR), whereby the former typically yields more accurate results. However, relying on CT scans as the pivotal examination during the pandemic presents several drawbacks, including high cost and cross-contamination problems. Radiation-free lung ultrasound (LUS) imaging, which requires high expertise and is thus underutilised, has demonstrated a strong correlation with CT scan results and a high reliability in pneumonia detection, even in the early stages. In this study, we developed a system based on modern DL methodologies in close collaboration with the Emergency Department (ED) of Fondazione IRCCS Policlinico San Matteo in Pavia. Using a reliable dataset comprising ultrasound clips from linear and convex probes, with 2908 frames from 450 hospitalised patients, we investigated the detection of Covid-19 patterns and their ranking on two severity scales. This study differs from other research projects in its novel approach involving four and seven classes. Patients admitted to the ED underwent 12 LUS examinations in different parts of the chest, each evaluated according to standardised severity scales. We adopted residual convolutional neural networks (CNNs), transfer learning, and data augmentation techniques. Through methodical hyperparameter tuning, we produced state-of-the-art results, with F1 scores, averaged over the number of classes considered, exceeding 98%, together with stable precision and recall.

    Parallel Classification Pipelines for Skin Cancer Detection Exploiting Hyperspectral Imaging on Hybrid Systems

    No full text
    The early detection of skin cancer is of crucial importance to plan an effective therapy to treat the lesion. In routine medical practice, the diagnosis is based on visual inspection of the lesion and relies on the dermatologist's expertise. After a first examination, the dermatologist may require a biopsy to confirm whether the lesion is malignant or not. This methodology suffers from false positives and false negatives, leading to unnecessary surgical procedures. Hyperspectral imaging is gaining relevance in this medical field, since it is a non-invasive and non-ionizing technique capable of providing higher accuracy than traditional imaging methods. Therefore, an automatic classification system based on hyperspectral images could improve medical practice by distinguishing malignant, benign, and atypical pigmented skin lesions. Additionally, the system could assist general practitioners in first aid care, preventing noncritical lesions from reaching dermatologists and thereby alleviating the workload of medical specialists. This paper presents a parallel pipeline for skin cancer detection that exploits hyperspectral imaging. The computational times of the serial processing have been reduced by adopting multicore and many-core technologies, such as the OpenMP and CUDA paradigms. Different parallel approaches have been combined, leading to fifteen versions of the classification pipeline. Experimental results using in-vivo hyperspectral images show that a hybrid parallel approach is capable of classifying an image of 50 × 50 pixels with 125 bands in less than 1 s.
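
    As an illustration of offloading one pipeline stage to the GPU, the sketch below min-max normalises each pixel's spectral signature with one thread per pixel. This is a hypothetical stage written in CUDA, not one of the fifteen pipeline versions described in the paper; in a hybrid approach, the remaining stages could meanwhile run on CPU cores under OpenMP.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical GPU stage of a hybrid pipeline: each thread min-max
// normalises the spectral signature of one pixel.
__global__ void normalisePixels(float *cube, int nPixels, int nBands)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPixels) return;
    float lo = 1e30f, hi = -1e30f;
    for (int b = 0; b < nBands; ++b) {
        float val = cube[p * nBands + b];
        lo = fminf(lo, val); hi = fmaxf(hi, val);
    }
    float range = fmaxf(hi - lo, 1e-12f);  // avoid division by zero
    for (int b = 0; b < nBands; ++b)
        cube[p * nBands + b] = (cube[p * nBands + b] - lo) / range;
}

int main()
{
    const int nPixels = 50 * 50, nBands = 125;  // sizes from the abstract
    float *cube;
    cudaMallocManaged(&cube, nPixels * nBands * sizeof(float));
    for (int i = 0; i < nPixels * nBands; ++i) cube[i] = (i % 101) * 0.5f;

    int threads = 256, blocks = (nPixels + threads - 1) / threads;
    normalisePixels<<<blocks, threads>>>(cube, nPixels, nBands);
    cudaDeviceSynchronize();
    printf("cube[0] = %f\n", cube[0]);
    cudaFree(cube);
    return 0;
}
```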

    High-Level Synthesis of Multiclass SVM Using Code Refactoring to Classify Brain Cancer from Hyperspectral Images

    No full text
    Currently, high-level synthesis (HLS) methods and tools are a highly relevant area in the strategy of several leading companies in the field of system-on-chips (SoCs) and field programmable gate arrays (FPGAs). HLS facilitates the work of system developers, who benefit from integrated and automated design workflows that considerably reduce the design time. Although many advances have been made in this research field, there are still some uncertainties about the quality and performance of designs generated with HLS methodologies. In this paper, we propose an optimization of the HLS methodology by code refactoring using Xilinx SDSoC™ (Software-Defined System-On-Chip). Several options were analyzed for each alternative through code refactoring of a multiclass support vector machine (SVM) classifier written in C, using two different Zynq®-7000 SoC devices from Xilinx: the ZC7020 (ZedBoard) and the ZC7045 (ZC706). The classifier was evaluated using a brain cancer database of hyperspectral images. The proposed methodology not only reduces the required resources, using less than 20% of the FPGA, but also reduces the power consumption by 23% compared with the full implementation. The obtained speedup of 2.86× (ZC7045) is the highest found in the literature for SVM hardware implementations.

    Accelerating the K-Nearest Neighbors Filtering Algorithm to Optimize the Real-Time Classification of Human Brain Tumor in Hyperspectral Images

    Get PDF
    The use of hyperspectral imaging (HSI) in the medical field is an emerging approach to assist physicians in diagnostic or surgical guidance tasks. However, HSI data processing involves very high computational requirements due to the huge amount of information captured by the sensors. One of the stages with the highest computational load is the K-Nearest Neighbors (KNN) filtering algorithm. The main goal of this study is to optimize and parallelize the KNN algorithm by exploiting GPU technology, in order to obtain real-time processing during brain cancer surgical procedures. This parallel version of the KNN performs the neighbor filtering of a classification map (obtained from a supervised classifier), evaluating the different classes simultaneously. The undertaken optimizations and the computational capabilities of the GPU yield a speedup of up to 66.18× when compared with a sequential implementation.
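
    The following CUDA sketch conveys the per-pixel structure of a KNN filtering pass over a classification map: each thread scans a spatial window, keeps the K candidates nearest in feature space, and takes the majority class among them. The window size, K, the scalar feature and all names are assumptions; the paper's filter operates on supervised probability maps and applies its own optimizations.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define K   8   // neighbours kept per pixel (assumed)
#define WIN 5   // half-width of the (2*WIN+1)^2 candidate window

// One thread per pixel: scan a spatial window, keep the K candidates
// nearest in feature space via a small insertion-sorted buffer, then
// take the majority class among them.
__global__ void knnFilter(const float *feat, const int *labelsIn,
                          int *labelsOut, int width, int height, int nClasses)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float bestD[K]; int bestL[K];
    for (int i = 0; i < K; ++i) { bestD[i] = 1e30f; bestL[i] = 0; }

    float f0 = feat[y * width + x];
    for (int dy = -WIN; dy <= WIN; ++dy)
        for (int dx = -WIN; dx <= WIN; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            float d = fabsf(feat[ny * width + nx] - f0);
            for (int i = 0; i < K; ++i)          // insert into K-smallest
                if (d < bestD[i]) {
                    for (int j = K - 1; j > i; --j) {
                        bestD[j] = bestD[j - 1]; bestL[j] = bestL[j - 1];
                    }
                    bestD[i] = d; bestL[i] = labelsIn[ny * width + nx];
                    break;
                }
        }

    int bestClass = 0, bestCount = -1;           // majority vote
    for (int c = 0; c < nClasses; ++c) {
        int count = 0;
        for (int i = 0; i < K; ++i) count += (bestL[i] == c);
        if (count > bestCount) { bestCount = count; bestClass = c; }
    }
    labelsOut[y * width + x] = bestClass;
}

int main()
{
    const int w = 128, h = 128, nClasses = 4;
    float *feat; int *lin, *lout;
    cudaMallocManaged(&feat, w * h * sizeof(float));
    cudaMallocManaged(&lin,  w * h * sizeof(int));
    cudaMallocManaged(&lout, w * h * sizeof(int));
    for (int i = 0; i < w * h; ++i) { feat[i] = (i % 7) * 0.1f; lin[i] = i % nClasses; }

    dim3 threads(16, 16), blocks((w + 15) / 16, (h + 15) / 16);
    knnFilter<<<blocks, threads>>>(feat, lin, lout, w, h, nClasses);
    cudaDeviceSynchronize();
    printf("labelsOut[0] = %d\n", lout[0]);
    return 0;
}
```

    Since every output pixel is computed independently, the filter parallelizes cleanly across GPU threads, which is what makes the reported 66.18× speedup over a sequential implementation plausible.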